
    Robust T-Loss for Medical Image Segmentation

    This paper presents a new robust loss function, the T-Loss, for medical image segmentation. The proposed loss is based on the negative log-likelihood of the Student-t distribution and can effectively handle outliers in the data by controlling its sensitivity with a single parameter. This parameter is updated during the backpropagation process, eliminating the need for additional computation or prior information about the level and spread of noisy labels. Our experiments show that the T-Loss outperforms traditional loss functions in terms of Dice scores on two public medical datasets for skin lesion and lung segmentation. We also demonstrate the ability of the T-Loss to handle different types of simulated label noise, resembling human error. Our results provide strong evidence that the T-Loss is a promising alternative for medical image segmentation, where high levels of noise or outliers in the dataset are a typical phenomenon in practice. The project website can be found at https://robust-tloss.github.io
    Comment: Early accepted to MICCAI 202
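    The core idea can be illustrated with a minimal NumPy sketch of a Student-t negative log-likelihood over residuals. Note this is an assumption-laden simplification: the paper learns the degrees-of-freedom parameter by backpropagation alongside the network weights, whereas here `nu` is a fixed hyperparameter, and the function name and signature are illustrative, not the paper's API.

```python
import numpy as np
from math import lgamma, log, pi

def student_t_nll(pred, target, nu=1.0):
    """Mean negative log-likelihood of residuals under a unit-scale
    Student-t distribution with `nu` degrees of freedom.

    The penalty grows only logarithmically in the residual, so large
    outliers are down-weighted compared to a squared-error loss."""
    r = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    log_norm = lgamma((nu + 1) / 2) - lgamma(nu / 2) - 0.5 * log(nu * pi)
    return float(np.mean(-log_norm + (nu + 1) / 2 * np.log1p(r ** 2 / nu)))
```

As `nu` grows the loss approaches the Gaussian log-likelihood; small `nu` gives heavy tails and hence robustness to label noise.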

    SelfClean: A Self-Supervised Data Cleaning Strategy

    Most benchmark datasets for computer vision contain irrelevant images, near duplicates, and label errors. Consequently, model performance on these benchmarks may not be an accurate estimate of generalization capabilities. This is a particularly acute concern in computer vision for medicine, where datasets are typically small, stakes are high, and annotation processes are expensive and error-prone. In this paper, we propose SelfClean, a general procedure to clean up image datasets exploiting a latent space learned with self-supervision. By relying on self-supervised learning, our approach focuses on intrinsic properties of the data and avoids annotation biases. We formulate dataset cleaning as either a set of ranking problems, which significantly reduce human annotation effort, or a set of scoring problems, which enable fully automated decisions based on score distributions. We demonstrate that SelfClean achieves state-of-the-art performance in detecting irrelevant images, near duplicates, and label errors within popular computer vision benchmarks, retrieving both injected synthetic noise and natural contamination. In addition, we apply our method to multiple image datasets and confirm an improvement in evaluation reliability.
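    The ranking view can be sketched for the near-duplicate case: pairs of images are ordered by cosine similarity of their self-supervised embeddings, so human review concentrates on the most suspicious pairs first. This is a hypothetical minimal sketch of the ranking formulation only, not the paper's implementation or its scoring variant.

```python
import numpy as np

def near_duplicate_ranking(emb):
    """Rank all image pairs by cosine similarity of their embeddings.

    `emb` is an (n, d) array of self-supervised features. Returns
    (i, j, similarity) triples sorted from most to least similar."""
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = x @ x.T
    i, j = np.triu_indices(len(emb), k=1)  # each unordered pair once
    order = np.argsort(-sim[i, j])
    return [(int(i[k]), int(j[k]), float(sim[i[k], j[k]])) for k in order]
```

The scoring variant would instead threshold these similarities against their empirical distribution to make fully automated decisions.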

    Cable Tree Wiring -- Benchmarking Solvers on a Real-World Scheduling Problem with a Variety of Precedence Constraints

    Cable trees are used in industrial products to transmit energy and information between different product parts. To date, they are mostly assembled by humans, and only a few automated manufacturing solutions exist using complex robotic machines. For these machines, the wiring plan has to be translated into a wiring sequence of cable plugging operations to be followed by the machine. In this paper, we study and formalize the problem of deriving the optimal wiring sequence for a given layout of a cable tree. We summarize our investigations to model this cable tree wiring problem (CTW) as a traveling salesman problem with atomic, soft atomic, and disjunctive precedence constraints as well as tour-dependent edge costs, such that it can be solved by state-of-the-art constraint programming (CP), Optimization Modulo Theories (OMT), and mixed-integer programming (MIP) solvers. It is further shown how the CTW problem can be viewed as a soft version of the coupled tasks scheduling problem. We discuss various modeling variants for the problem, prove its NP-hardness, and empirically compare CP, OMT, and MIP solvers on a benchmark set of 278 instances. The complete benchmark set with all models and instance data is available on GitHub and is accepted for inclusion in the MiniZinc challenge 2020.
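    The basic structure of the problem can be sketched as follows: find the cheapest plugging order subject to hard precedence pairs. This toy enumeration is illustrative only; it omits the soft atomic and disjunctive constraints and the tour-dependent costs from the paper, and real instances require CP, OMT, or MIP solvers rather than brute force.

```python
from itertools import permutations

def best_wiring_sequence(ops, cost, prec):
    """Exhaustively find the cheapest plugging order over `ops` that
    respects every (a, b) pair in `prec` (a must be plugged before b).

    `cost[a][b]` is the cost of performing b directly after a."""
    best, best_cost = None, float("inf")
    for order in permutations(ops):
        pos = {op: k for k, op in enumerate(order)}
        if any(pos[a] > pos[b] for a, b in prec):
            continue  # violates a hard precedence constraint
        c = sum(cost[a][b] for a, b in zip(order, order[1:]))
        if c < best_cost:
            best, best_cost = order, c
    return best, best_cost
```

Enumeration is factorial in the number of operations, which is exactly why the paper benchmarks declarative solvers instead.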

    Towards Reliable Dermatology Evaluation Benchmarks

    Benchmark datasets for digital dermatology unwittingly contain inaccuracies that reduce trust in model performance estimates. We propose a resource-efficient data cleaning protocol to identify issues that escaped previous curation. The protocol leverages an existing algorithmic cleaning strategy and is followed by a confirmation process terminated by an intuitive stopping criterion. Based on confirmation by multiple dermatologists, we remove irrelevant samples and near duplicates and estimate the percentage of label errors in six dermatology image datasets for model evaluation promoted by the International Skin Imaging Collaboration. Along with this paper, we publish revised file lists for each dataset, which should be used for model evaluation. Our work paves the way for more trustworthy performance assessment in digital dermatology.
    Comment: Link to the revised file lists: https://github.com/Digital-Dermatology/SelfClean-Revised-Benchmark
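    The confirmation loop with a stopping criterion might look like the sketch below: experts review algorithmically ranked candidates top-down and stop once a run of consecutive candidates is rejected. Both the `is_issue` callback and the run-of-rejections rule are hypothetical stand-ins, not necessarily the paper's exact criterion.

```python
def confirm_ranked(candidates, is_issue, patience=5):
    """Walk a ranked candidate list top-down, keep confirmed issues,
    and stop after `patience` consecutive rejections.

    `is_issue` stands in for the expert (dermatologist) confirmation
    step; ranking concentrates true issues near the top, so a long run
    of rejections suggests the remaining list is mostly clean."""
    confirmed, misses = [], 0
    for c in candidates:
        if is_issue(c):
            confirmed.append(c)
            misses = 0
        else:
            misses += 1
            if misses >= patience:
                break
    return confirmed
```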

    a planned ancillary analysis of the coVAPid cohort

    Funding: This study was supported in part by a grant from the French government through the «Programme Investissement d’Avenir» (I-SITE ULNE) managed by the Agence Nationale de la Recherche (coVAPid project). The funders of the study had no role in the study design, data collection, analysis, or interpretation, writing of the report, or decision to submit for publication. BACKGROUND: Patients with SARS-CoV-2 infection are at higher risk for ventilator-associated pneumonia (VAP). No study has evaluated the relationship between VAP and mortality in this population, or compared this relationship between SARS-CoV-2 patients and other populations. The main objective of our study was to determine the relationship between VAP and mortality in SARS-CoV-2 patients. METHODS: Planned ancillary analysis of a multicenter retrospective European cohort. VAP was diagnosed using clinical, radiological, and quantitative microbiological criteria. Univariable and multivariable marginal Cox regression models, with cause-specific hazards for duration of mechanical ventilation and ICU stay, were used to compare outcomes between study groups. Extubation and ICU discharge alive were considered events of interest, and mortality as the competing event. FINDINGS: Of 1576 included patients, 568 had SARS-CoV-2 pneumonia, 482 had influenza pneumonia, and 526 had no evidence of viral infection at ICU admission. VAP was associated with a significantly higher risk of 28-day mortality in the SARS-CoV-2 (adjusted HR 1.70 (95% CI 1.16-2.47), p = 0.006) and influenza groups (1.75 (1.03-3.02), p = 0.045), but not in the no viral infection group (1.07 (0.64-1.78), p = 0.79). VAP was associated with significantly longer duration of mechanical ventilation in the SARS-CoV-2 group, but not in the influenza or no viral infection groups. VAP was associated with significantly longer duration of ICU stay in the 3 study groups. No significant difference was found in heterogeneity of outcomes related to VAP between the 3 groups, suggesting that the impact of VAP on mortality was not different between study groups. INTERPRETATION: VAP was associated with a significantly increased 28-day mortality rate in SARS-CoV-2 patients. However, SARS-CoV-2 pneumonia, as compared to influenza pneumonia or no viral infection, did not significantly modify the relationship between VAP and 28-day mortality. CLINICAL TRIAL REGISTRATION: The study was registered at ClinicalTrials.gov, number NCT04359693.
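    As a consistency check on the reported figures, the Wald p-value for a hazard ratio can be recovered from its 95% confidence interval, which is symmetric on the log scale. The sketch below is generic arithmetic, not the study's analysis code; applied to the SARS-CoV-2 group's HR 1.70 (1.16-2.47) it reproduces the reported p = 0.006.

```python
import math

def wald_p_from_ci(hr, lo, hi):
    """Two-sided Wald p-value for a hazard ratio given its 95% CI.

    The standard error of log(HR) is the log-scale CI width divided by
    2 * 1.96; the p-value follows from the standard normal tail."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(hr) / se
    return math.erfc(abs(z) / math.sqrt(2))
```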

    Ischemia-Reperfusion Injury and Pregnancy Initiate Time-Dependent and Robust Signs of Up-Regulation of Cardiac Progenitor Cells

    To explore how cardiac regeneration and cell turnover adapt to disease, different forms of stress were studied for their effects on the cardiac progenitor cell markers c-Kit and Isl1, the early cardiomyocyte marker Nkx2.5, and mast cells. Adult female rats were examined during pregnancy, after myocardial infarction, and after ischemia-reperfusion injury with/without insulin-like growth factor-1 (IGF-1) and hepatocyte growth factor (HGF). Different cardiac sub-domains were analyzed at one and two weeks post-intervention, at both the mRNA and protein levels. While pregnancy and myocardial infarction up-regulated Nkx2.5 and c-Kit (adjusted for mast cell activation), ischemia-reperfusion injury induced the strongest up-regulation, which occurred globally throughout the entire heart and not just around the site of injury. This response seems to be partly mediated by increased endogenous production of IGF-1 and HGF. Contrary to c-Kit, Isl1 was not up-regulated by pregnancy or myocardial infarction, while ischemia-reperfusion injury induced not a global but a focal up-regulation in the outflow tract and in the peri-ischemic region, correlating with the up-regulation of endogenous IGF-1. The addition of IGF-1 and HGF did boost the endogenous expression of IGF-1 and HGF, correlating with the focal up-regulation of Isl1. c-Kit expression was not further influenced by the exogenous growth factors. This indicates a spatial mismatch between c-Kit and Nkx2.5 expression on the one hand and Isl1 expression on the other. In conclusion, ischemia-reperfusion injury was the strongest stimulus, with both global and focal cardiomyocyte progenitor cell marker up-regulation correlating with the endogenous up-regulation of the growth factors IGF-1 and HGF. Pregnancy also induced a general up-regulation of c-Kit and early Nkx2.5+ cardiomyocytes throughout the heart. Utilization of these pathways could provide new strategies for the treatment of cardiac disease.

    Generic Solution Construction in Valuation-Based Systems

    Valuation algebras abstract a large number of formalisms for automated reasoning and enable the definition of generic inference procedures. Many of these formalisms provide some notion of solutions. Typical examples are satisfying assignments in constraint systems, models in logics, or solutions to linear equation systems. Contrary to inference, there is no general algorithm to compute solutions in arbitrary valuation algebras. This paper states formal requirements for the presence of solutions and proposes a generic algorithm for solution construction based on the results of a previously executed inference scheme. We study the application of generic solution construction to semiring constraint systems, sparse linear systems, and algebraic path problems, and show that the proposed method generalizes various existing approaches for specific formalisms in the literature.
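    For the linear-systems instance mentioned above, the two-phase pattern is familiar: elimination plays the role of the inference scheme, after which the solution is assembled variable by variable from its results. The back-substitution sketch below illustrates only that second, solution-construction phase for a dense upper-triangular system; it is an assumed concrete instance, not the paper's generic algorithm.

```python
def back_substitute(U, b):
    """Construct the solution of an upper-triangular system U x = b.

    Each variable is fixed from the already-constructed later variables,
    mirroring generic solution construction over inference results."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x
```

The generic algorithm plays the same game with messages computed by a local-computation inference scheme instead of eliminated equations.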

    Generalized Information Theory based on the Theory of Hints

    The aggregate uncertainty is the only known functional for Dempster-Shafer theory that generalizes the Shannon and Hartley measures and satisfies all classical requirements for uncertainty measures, including subadditivity. Although being posed several times in the literature, it is still an open problem whether the aggregate uncertainty is unique under these properties. This paper derives an uncertainty measure based on the theory of hints and shows its equivalence to the pignistic entropy. It does not satisfy subadditivity, but the viewpoint of hints uncovers a weaker version of subadditivity. On the other hand, the pignistic entropy has some crucial advantages over the aggregate uncertainty, i.e. explicitness of the formula and sensitivity to changes in evidence. We observe that neither of the two measures captures the full uncertainty of hints and propose an extension of the pignistic entropy called hints entropy that satisfies all axiomatic requirements, including subadditivity, while preserving the above advantages over the aggregate uncertainty.
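    The pignistic entropy has an explicit formula: spread each focal set's mass uniformly over its elements, then take the Shannon entropy of the resulting probabilities. The sketch below covers only this pignistic entropy, not the hints entropy extension, and the mass-function representation as a dict of frozensets is an illustrative choice.

```python
import math

def pignistic_entropy(masses):
    """Shannon entropy of the pignistic transform of a mass function.

    `masses` maps focal sets (frozensets) to masses summing to 1; each
    focal set distributes its mass uniformly over its elements."""
    p = {}
    for focal, m in masses.items():
        for x in focal:
            p[x] = p.get(x, 0.0) + m / len(focal)
    return -sum(v * math.log2(v) for v in p.values() if v > 0)
```

For the vacuous mass function on a two-element frame this gives 1 bit, matching the Hartley measure, while a Bayesian mass function reduces it to ordinary Shannon entropy.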